References
• Abou Baker, N. and Handmann, U., 2024. One size does not fit all in evaluating model selection scores for image classification. Scientific Reports, 14(1), p.30239.
• Alam, T.S., Jowthi, C.B. and Pathak, A., 2024. Comparing pre-trained models for efficient leaf disease detection: a study on custom CNN. Journal of Electrical Systems and Information Technology, 11(1), p.12.
• Elhanashi, A., Saponara, S., Zheng, Q., Almutairi, N., Singh, Y., Kuanar, S., Ali, F., Unal, O. and Faghani, S., 2025. AI-powered object detection in radiology: current models, challenges, and future direction. Journal of Imaging, 11(5), p.141.
• Geirhos, R., Rubisch, P., Michaelis, C., Bethge, M., Wichmann, F.A. and Brendel, W., 2018. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. In International Conference on Learning Representations.
• Krizhevsky, A., Nair, V. and Hinton, G., 2009. CIFAR-10 (Canadian Institute for Advanced Research) dataset. Toronto: University of Toronto. Available at: https://www.cs.toronto.edu/~kriz/cifar.html (Accessed: 9 October 2025).
• Wang, D., Wang, J.G. and Xu, K., 2021. Deep learning for object detection, classification and tracking in industry applications. Sensors (Basel, Switzerland), 21(21), p.7349.
• Xu, Y. and Goodacre, R., 2018. On splitting training and validation set: a comparative study of cross-validation, bootstrap and systematic sampling for estimating the generalization performance of supervised learning. Journal of Analysis and Testing, 2(3), pp.249-262.